We present a framework for ranking images within their class based on the strength of spurious cues present. By measuring the gap in accuracy between the highest- and lowest-ranked images (which we call the spurious gap), we assess spurious-feature reliance for $89$ diverse ImageNet models, finding that even the best models underperform on images with weak spurious presence. However, the effect of spurious cues varies far more dramatically across classes, emphasizing the crucial, often overlooked, class-dependence of the spurious correlation problem. While most spurious features we observe are clarifying (i.e., improving test-time accuracy when present, as is typically expected), we surprisingly find many cases of confusing spurious features, where models perform better when they are absent. We then close the spurious gap by training new classification heads on low-ranked images (i.e., those lacking common spurious cues), resulting in improved effective robustness to distribution shifts (ObjectNet, ImageNet-R, ImageNet-Sketch). We also propose a second metric to assess feature reliability, finding that spurious features are generally less reliable than non-spurious (core) ones, though again, spurious features can be more reliable for certain classes. To enable our analysis, we annotated $5,000$ feature-class dependencies over {\it all} of ImageNet as core or spurious using minimal human supervision. Finally, we show that the feature-discovery and spuriosity-ranking framework can be extended to other datasets such as CelebA and WaterBirds in a lightweight fashion, requiring only linear-layer training, leading to the discovery of a previously unknown racial bias in CelebA hair classification.
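The ranking idea can be illustrated with a minimal numpy sketch. Assume per-image activations of the neural features annotated as spurious for one class are already available (the activation matrix here is a hypothetical stand-in for the robust-model neurons the abstract refers to): standardize each feature and average to obtain a spuriosity score per image.

```python
import numpy as np

def spuriosity_rank(spurious_acts):
    """Rank a class's images by 'spuriosity': how strongly that class's
    spurious neural features fire on them. spurious_acts has shape
    (n_images, n_spurious_features).
    """
    # Standardize each feature so no single neuron dominates the score.
    z = (spurious_acts - spurious_acts.mean(0)) / (spurious_acts.std(0) + 1e-8)
    spuriosity = z.mean(axis=1)
    # Indices ordered from weakest to strongest spurious-cue presence.
    return np.argsort(spuriosity)

# Toy activations for three images and two spurious features.
acts = np.array([[5.0, 4.0],   # strong spurious cues
                 [0.1, 0.2],   # weak spurious cues
                 [2.0, 2.5]])  # in between
order = spuriosity_rank(acts)  # lowest-spuriosity image first
```

The accuracy gap between the tail ends of such an ordering is what the abstract calls the spurious gap.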
A key reason for the lack of reliability of deep neural networks in the real world is their heavy reliance on spurious input features that are not causally related to the true label. Focusing on image classification, we define causal attributes as visual features that are always part of the object, and spurious attributes as those likely to co-occur with the object but not part of it (e.g., the attribute "fingers" for the class "band aid"). Traditional methods for discovering spurious features either require extensive human annotation (and are thus not scalable) or are useful only for a specific model. In this work, we introduce a scalable framework to discover a subset of spurious and causal visual attributes used in model inference and localize them on a large number of images with minimal human supervision. Our approach is based on this key idea: to identify the spurious or causal visual attributes used in model predictions, we identify spurious or causal neural features (penultimate-layer neurons of a robust model) via limited human supervision (e.g., using the top activating images for each feature). We then argue that these neural feature annotations generalize to many more images without any human supervision. We use the activation maps of these neural features as soft masks to highlight spurious or causal visual attributes. Using this methodology, we introduce a dataset containing causal and spurious masks for a large set of samples from ImageNet. We evaluate several popular ImageNet models and show that they rely heavily on various spurious features in their predictions.
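The soft-masking step can be sketched in a few lines of numpy, assuming a low-resolution activation map for one neural feature has already been extracted (the map, image, and nearest-neighbor upsampling here are hypothetical simplifications of whatever the actual pipeline uses):

```python
import numpy as np

def soft_mask_from_activation(act_map, image, upsample):
    """Turn a low-resolution neural-feature activation map into a soft mask
    that highlights the corresponding region of the image.

    act_map: (h, w) penultimate-layer activation map for one neural feature.
    image:   (h*upsample, w*upsample, 3) input image with values in [0, 1].
    """
    # Normalize activations to [0, 1] so they act as per-pixel weights.
    a = act_map - act_map.min()
    if a.max() > 0:
        a = a / a.max()
    # Nearest-neighbor upsampling to image resolution.
    mask = np.kron(a, np.ones((upsample, upsample)))
    # Soft-masked image: high-activation regions keep their intensity.
    return mask[..., None] * image

# Toy example: a 2x2 activation map concentrated in the top-left cell.
act = np.array([[4.0, 0.0], [0.0, 0.0]])
img = np.ones((8, 8, 3))
highlighted = soft_mask_from_activation(act, img, upsample=4)
```

Regions where the annotated feature fires survive the mask; everything else is attenuated, which is what makes the highlighted attributes inspectable.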
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
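The patch-based strategy the respondents most commonly reported can be sketched minimally: slide a fixed-size window over an array that is too large to process at once and collect the patches. (A hypothetical 2D numpy sketch; in the surveyed challenges this would typically run over 3D medical volumes.)

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Collect fixed-size patches from a 2D array by sliding a window.
    The same idea extends to 3D volumes with a third loop.
    """
    h, w = image.shape
    ph, pw = patch
    patches = [
        image[i:i + ph, j:j + pw]
        for i in range(0, h - ph + 1, stride)
        for j in range(0, w - pw + 1, stride)
    ]
    return np.stack(patches)

img = np.arange(16.0).reshape(4, 4)
p = extract_patches(img, patch=(2, 2), stride=2)  # four non-overlapping 2x2 patches
```

Each patch can then be fed to the model independently, trading global context for tractable memory use.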
Cutting-edge diffusion models produce images with high quality and customizability, enabling them to be used for commercial art and graphic design purposes. But do diffusion models create unique works of art, or are they stealing content directly from their training sets? In this work, we study image retrieval frameworks that enable us to compare generated images with training samples and detect when content has been replicated. Applying our frameworks to diffusion models trained on multiple datasets including Oxford flowers, Celeb-A, ImageNet, and LAION, we discuss how factors such as training set size impact rates of content replication. We also identify cases where diffusion models, including the popular Stable Diffusion model, blatantly copy from their training data.
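The retrieval comparison at the core of such frameworks can be sketched as nearest-neighbor search over feature vectors by cosine similarity. (A hypothetical numpy sketch; the vectors stand in for whatever descriptors the actual retrieval backbone emits.)

```python
import numpy as np

def top_match(query_feat, train_feats):
    """Replication detection by retrieval: compare a generated image's
    feature vector against training-set features by cosine similarity
    and return the best-matching index and its score.
    """
    q = query_feat / np.linalg.norm(query_feat)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sims = t @ q
    best = int(np.argmax(sims))
    return best, float(sims[best])

# Toy training-set features and a generated-image feature close to item 0.
train = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.7, 0.7]])
idx, score = top_match(np.array([0.9, 0.1]), train)
```

A high top-match score flags the generated image as a candidate replication for human inspection.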
Recommender systems are ubiquitous in our interactions with the current digital world. Whether we are shopping for clothes, scrolling YouTube for exciting videos, or searching for restaurants in a new city, recommender systems at the back end power these services. Most large-scale recommender systems are huge models trained on extensive datasets and are black boxes to both their developers and end-users. Prior research has shown that providing recommendations along with their reasons enhances the trust, scrutability, and persuasiveness of recommender systems. Recent literature in explainability has been inundated with works proposing several algorithms to this end. Most of these works provide item-style explanations, i.e., `We recommend item A because you bought item B.' We propose a novel approach, RecXplainer, to generate more fine-grained explanations based on the user's preference over the attributes of the recommended items. We perform experiments using real-world datasets and demonstrate the efficacy of RecXplainer in capturing users' preferences and using them to explain recommendations. We also propose ten new evaluation metrics and compare RecXplainer to six baseline methods.
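The attribute-style output format can be illustrated with a toy sketch: estimate the user's taste over attributes from items they interacted with, then name the recommended item's best-matching attribute. (Hypothetical throughout; RecXplainer learns these preferences rather than averaging, and the items, attributes, and message format here are invented for illustration.)

```python
import numpy as np

def attribute_explanation(user_history, item_attrs, rec_item, attr_names):
    """Explain a recommendation via the attribute that best matches the
    user's inferred attribute preferences.
    """
    pref = item_attrs[user_history].mean(axis=0)  # inferred attribute taste
    match = pref * item_attrs[rec_item]           # overlap with the rec
    top = attr_names[int(np.argmax(match))]
    return f"We recommend this item for its {top}."

attrs = np.array([[1, 0, 1],   # item 0: low price, high rating
                  [1, 0, 0],   # item 1: low price
                  [0, 1, 1]])  # item 2: proximity, high rating
names = ["low price", "proximity", "high rating"]
msg = attribute_explanation([0, 1], attrs, rec_item=2, attr_names=names)
```

Unlike the item-style template above, the explanation is phrased in terms of why this item suits this user's tastes.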
Tasks critical to enterprise profitability, such as customer churn prediction, fraudulent account detection, or customer lifetime value estimation, are often tackled by models trained on features engineered from customer data in tabular format. Application-specific feature engineering adds development, operationalization, and maintenance costs over time. Recent advances in representation learning present an opportunity to simplify and generalize feature engineering across applications. When applying these advancements to tabular data, researchers must deal with data heterogeneity, variations in customer engagement history, and the sheer volume of enterprise datasets. In this paper, we propose a novel approach to encode tabular data containing customer transactions, purchase history, and other interactions into a generic representation of a customer's association with the business. We then evaluate these embeddings as features to train multiple models spanning a variety of applications. CASPR, Customer Activity Sequence-based Prediction and Representation, applies the Transformer architecture to encode activity sequences, improving model performance and avoiding bespoke feature engineering across applications. Our experiments at scale validate CASPR for both small and large enterprise applications.
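The interface such an encoder provides can be sketched minimally: map a variable-length customer activity sequence to a fixed-length vector usable as features by any downstream model. (A hypothetical stand-in: CASPR uses a Transformer encoder, whereas this sketch mean-pools randomly initialized event embeddings purely to show the input/output shape.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a learned embedding table over activity event types.
n_event_types, d = 6, 4
event_table = rng.normal(size=(n_event_types, d))

def customer_embedding(activity_seq):
    """Fixed-length representation of a variable-length activity sequence,
    given as a list of event-type ids (e.g., browse=0, purchase=2, ...)."""
    return event_table[np.array(activity_seq)].mean(axis=0)

emb_a = customer_embedding([0, 2, 2, 5])
emb_b = customer_embedding([1, 3])
# Both customers yield same-dimensional features for downstream models,
# regardless of how long their histories are.
```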
Visual Question Answering (VQA) is a multimodal task that involves answering questions about an input image: semantically understanding the image content and answering in natural language. Given the range of questions a VQA system can answer, using VQA for disaster management is an important line of research. The main challenge, however, is the delay incurred in generating labels for the affected areas. To address this, we deploy a pre-trained CLIP model, which is trained on image-text pairs. However, we empirically observe poor zero-shot performance from this model. We therefore instead use the pre-trained text and image embeddings from this model for our supervised training, and surpass the previous state-of-the-art results on the FloodNet dataset. We extend this to a continual setting, which is a more realistic scenario. We address the problem of catastrophic forgetting using various experience replay methods. Our training runs are available at: https://wandb.ai/compyle/continual_vqa_final
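The "frozen embeddings plus supervised head" recipe can be sketched abstractly: concatenate the fixed image and text embeddings and train only a linear classifier on top. (A hypothetical numpy sketch with random Gaussian vectors standing in for the CLIP encoder outputs; the dimensions, learning rate, and softmax-regression head are all illustrative choices, not the paper's.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen CLIP embeddings of (image, question) pairs.
n, d_img, d_txt, n_classes = 200, 16, 8, 3
img_emb = rng.normal(size=(n, d_img))
txt_emb = rng.normal(size=(n, d_txt))
labels = rng.integers(0, n_classes, size=n)

# Concatenate the frozen embeddings; only the linear head is trained.
x = np.concatenate([img_emb, txt_emb], axis=1)
w = np.zeros((x.shape[1], n_classes))
onehot = np.eye(n_classes)[labels]

for _ in range(300):  # plain softmax regression by gradient descent
    logits = x @ w
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    w -= 0.1 * x.T @ (p - onehot) / n

train_acc = float((np.argmax(x @ w, axis=1) == labels).mean())
```

Because the encoders stay frozen, only the small head needs labeled disaster imagery, which matters when labels arrive with delay.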
Deep neural networks (DNNs) have achieved unprecedented performance on computer vision tasks that are now nearly ubiquitous across commerce, technology, and science. Although substantial effort has been devoted to designing highly accurate architectures and providing usable model explanations, most state-of-the-art methods are first designed for natural vision and then transferred to the medical domain. This thesis aims to address this gap by proposing novel architectures that incorporate the domain-specific constraints of medical imaging into DNN model and explanation design.
Mixed integer programs (MIPs) are typically solved by branch-and-bound algorithms. Recently, learning to imitate fast approximations of the expert strong branching heuristic has gained attention due to its success in reducing the running time for solving MIPs. However, existing learning-to-branch methods assume that the entire training data is available in a single session of training. This assumption is often not true, and if training data is supplied in a continual fashion over time, existing techniques suffer from catastrophic forgetting. In this work, we study the hitherto unexplored paradigm of lifelong learning to branch on mixed integer programs. To mitigate catastrophic forgetting, we propose LIMIP, which is powered by the idea of modeling a MIP instance in the form of a bipartite graph, which we map to an embedding space using a bipartite graph attention network. This rich embedding space avoids catastrophic forgetting through the application of knowledge distillation and elastic weight consolidation, wherein we learn the parameters key to preserving efficacy, which are therefore protected from significant drift. We evaluate LIMIP on a series of NP-hard problems and establish that, compared to existing baselines, LIMIP is up to 50% better when confronted with lifelong learning.
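The elastic weight consolidation component mentioned above can be sketched as a quadratic penalty that anchors parameters in proportion to their importance on earlier tasks. (A generic EWC sketch, not LIMIP's implementation; the toy parameters and Fisher values are invented.)

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Elastic weight consolidation: penalize drift of parameters that were
    important (high Fisher information) for previously learned tasks. In
    practice `fisher` is estimated from squared gradients on old-task data.
    """
    return 0.5 * lam * np.sum(fisher * (params - old_params) ** 2)

# A parameter important for an old task (fisher=10.0) is held near its old
# value, while an unimportant one (fisher=0.1) is free to drift.
old = np.array([1.0, -2.0])
fisher = np.array([10.0, 0.1])
new = np.array([1.5, 0.0])
penalty = ewc_penalty(new, old, fisher)  # added to the new-task loss
```

Adding this term to the loss on each new batch of MIP instances is what protects the important parameters from the drift the abstract describes.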
Modeling what makes an advertisement persuasive, i.e., eliciting the desired response from the consumer, is critical to the study of propaganda, social psychology, and marketing. Despite its importance, computational modeling of persuasion in computer vision is still in its infancy, primarily due to the lack of benchmark datasets that can provide persuasion-strategy labels associated with ads. Motivated by the persuasion literature in social psychology and marketing, we introduce an extensive vocabulary of persuasion strategies and build the first ad image corpus annotated with persuasion strategies. We then formulate the task of persuasion strategy prediction via multimodal learning, for which we design a multi-task attention fusion model that can leverage other ad-understanding tasks to predict persuasion strategies. Additionally, we conduct a real-world case study on 1600 advertising campaigns of 30 Fortune 500 companies, in which we use our model's predictions to analyze which strategies work with different demographics (age and gender). The dataset also provides image segmentation masks that label the persuasion strategies in the corresponding ad images on the test split. We publicly release our code and dataset at https://midas-research.github.io/persuasion-avertisements/.